
    GPU in Physics Computation: Case Geant4 Navigation

    General-purpose computing on graphics processing units (GPU) is a potential method of speeding up scientific computation with low cost and high energy efficiency. We experimented with the particle physics simulation toolkit Geant4, used at CERN, to benchmark its geometry navigation functionality on a GPU. The goal was to find out whether Geant4 physics simulations could benefit from GPU acceleration and how difficult it is to modify Geant4 code to run on a GPU. We ported selected parts of the Geant4 code to C99 and CUDA and implemented a simple gamma physics simulation on top of this code to measure efficiency. The performance of the program was tested by running it on two different platforms: an NVIDIA GeForce GTX 470 GPU and a 12-core AMD CPU system. Our conclusion was that GPUs can be a competitive alternative to multi-core computers, but porting existing software in an efficient way is challenging.
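    The core of geometry navigation is computing how far a particle can travel along its current direction before it crosses a volume boundary, which is exactly the kind of independent per-particle work that maps onto one GPU thread each. As a rough illustration only (not the actual Geant4 or CUDA code from the paper), the navigation step for an axis-aligned box can be sketched in Python:

```python
def distance_to_box(pos, direction, box_min, box_max):
    """Distance from pos to the exit surface of an axis-aligned box.

    Assumes pos lies inside the box and direction is a unit vector.
    In a GPU port, each thread would evaluate this for one particle.
    """
    dist = float("inf")
    for axis in range(3):
        d = direction[axis]
        if d > 0:
            # Moving toward the upper face on this axis.
            dist = min(dist, (box_max[axis] - pos[axis]) / d)
        elif d < 0:
            # Moving toward the lower face on this axis.
            dist = min(dist, (box_min[axis] - pos[axis]) / d)
    return dist
```

    The function is branch-light and data-parallel, which is what makes navigation a plausible GPU kernel; the real Geant4 navigator handles many solid types and a volume hierarchy, which is where the porting difficulty noted in the abstract comes from.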

    Energy efficiency of dynamic management of virtual cluster with heterogeneous hardware

    Cloud computing is an essential part of today's computing world. A continuously increasing amount of computation with varying resource requirements is placed in large data centers. The variation among computing tasks, both in their resource requirements and time of processing, makes it possible to optimize the usage of physical hardware by applying cloud technologies. In this work, we develop a prototype system for load-based management of virtual machines in an OpenStack computing cluster. Our prototype is based on the idea of 'packing' idle virtual machines onto special park servers optimized for this purpose. We evaluate the method by running real high-energy physics analysis software in an OpenStack test cluster and by simulating the same principle using the CloudSim simulator software. The results show a clear improvement, 9–48 %, in total energy efficiency when using our method together with resource overbooking and heterogeneous hardware. Peer reviewed.
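    The 'packing' idea can be illustrated with a minimal first-fit sketch. This is a hypothetical toy model, not the paper's OpenStack prototype: idle VMs are consolidated onto as few park servers as possible, and overbooking assumes an idle VM actually consumes only a fraction of its nominal memory:

```python
def pack_idle_vms(idle_vms, park_capacity, overbook=2.0):
    """First-fit packing of idle VMs onto park servers.

    idle_vms: list of (vm_name, nominal_size) pairs.
    With overbooking, a park server of capacity C accepts idle VMs
    totalling up to overbook * C in nominal size.
    Returns (placement, number_of_park_servers_used).
    """
    servers = []    # remaining effective capacity per park server
    placement = []  # (vm_name, server_index)
    for vm, size in idle_vms:
        for i, free in enumerate(servers):
            if size <= free:            # first server with room
                servers[i] -= size
                placement.append((vm, i))
                break
        else:                           # no room anywhere: open a new server
            servers.append(park_capacity * overbook - size)
            placement.append((vm, len(servers) - 1))
    return placement, len(servers)
```

    Freeing the servers the idle VMs vacated, so they can be powered down or given to active workloads, is where the energy saving in the abstract would come from.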

    System management in server based computing with virtualization

    The purpose of this thesis is to study the use of virtualization in server-based computing services. The results of the study were used to produce a prototype for the Finnish National Technology Agency (TEKES) funded NETGATE 2 research project. The prototype automates the installation of remote machines and provides a way to distribute centrally managed services. Installing and configuring different services requires a great deal of work and knowledge. Much of this work is repetitive, as the same service is installed in several locations and on several occasions. In a large organization this repetition can become a significant expense. Small organizations, on the other hand, cannot always afford a dedicated administrator, and the work falls to whoever seems most capable, which often results in insecure and unreliable installations. In both cases it would be beneficial to use a centrally managed service, provided either in-house or externally. We solved this problem by creating a system that distributes centrally managed services. Services and their operating systems are enclosed in transportable system images, which can be run as virtual machines on top of a virtualization platform. Images are created and maintained centrally and reused in many locations; virtualization provides the images with a standard platform on which to run, guaranteeing that they work. The prototype uses Xen virtualization and the Debian package management system to manage a collection of virtual machine images and distribute them to remote machines according to their needs. The system provides tools for automated deployment of the base operating system and virtualization tools over the Internet, as well as for automated updates. It also decreases the amount of administrative work and removes the need for expertise at the remote location. Our prototype makes the installation and management of large systems possible with less work than conventional methods. The system has been tested in several locations, such as the CERN library. The Debian package management system has proven to be an easy way to start up virtualized services, and Xen has been an effective virtualization platform.
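    Distributing images through a package manager means a remote machine only needs to compare its installed image versions against the central repository and fetch what is newer. The sketch below is purely illustrative of that update semantics; the actual prototype delegates this logic to Debian's apt:

```python
def images_to_update(installed, available):
    """Return the images a remote machine should fetch.

    installed: {image_name: version_tuple} currently on the machine.
    available: {image_name: version_tuple} in the central repository.
    An image is fetched if it is missing locally or the repository
    holds a newer version (tuple comparison, as in package versions).
    """
    return {name: ver for name, ver in available.items()
            if installed.get(name, (0,)) < ver}
```

    Because versions compare as tuples, a machine with no copy of an image (treated as version (0,)) fetches it on the first update run, which is how new services reach a remote site.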

    The Effect of Networking Performance on High Energy Physics Computing

    High Energy Physics (HEP) data analysis consists of simulating and analysing events in particle physics. In order to understand physics phenomena, one must collect and go through a very large quantity of data generated by particle accelerators and software simulations. This data analysis can be done using the cloud computing paradigm in a distributed computing environment, where data and computation can be located in different, geographically distant data centres. This adds complexity and overhead to networking. In this paper, we study how the networking solution and its performance affect the efficiency and energy consumption of HEP computing. Our results indicate that higher latency both prolongs the processing time and increases the energy consumption. Peer reviewed.
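    The mechanism behind the result is straightforward: when analysis code reads remote data synchronously, every request stalls for one round trip, and the machine draws power for the whole stretched run. A back-of-the-envelope model (illustrative only; all names and numbers here are assumptions, not the paper's measurements) makes the coupling between latency and energy explicit:

```python
def job_time_and_energy(compute_s, n_requests, rtt_s, power_w=100.0):
    """Toy model of a remote-data analysis job.

    compute_s:  pure computation time in seconds.
    n_requests: number of synchronous remote data requests.
    rtt_s:      network round-trip time in seconds.
    power_w:    average power draw of the node in watts (assumed).

    Each request stalls the job for one round trip, so higher latency
    stretches wall-clock time, and energy scales with that time.
    """
    total_s = compute_s + n_requests * rtt_s
    energy_j = power_w * total_s
    return total_s, energy_j
```

    In this model, doubling the round-trip time adds the same fixed stall to every request, so both runtime and energy grow linearly with latency, consistent with the trend the abstract reports.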